Unsupervised domain adaptation with progressive adaptation of subspaces
Authors
Abstract
Unsupervised Domain Adaptation (UDA) aims to classify an unlabeled target domain by transferring knowledge from a labeled source domain under distribution shift. Most existing UDA methods try to mitigate the adverse impact induced by the shift via reducing the cross-domain discrepancy. However, such approaches easily suffer from a notorious mode collapse issue due to the lack of labels in the target domain. Naturally, one of the effective ways to address this is to reliably estimate pseudo labels for the target domain, which is itself hard. To overcome this, we propose a novel method named Progressive Adaptation of Subspaces (PAS), which utilizes the intuition that it is much more reasonable to gradually obtain reliable pseudo labels. Specifically, we progressively and steadily refine the shared subspaces, acting as a bridge for knowledge transfer, by adaptively anchoring/selecting and leveraging those target samples with reliable pseudo labels. Subsequently, the refined subspaces can in turn provide more reliable pseudo labels, so that mode collapse is highly mitigated. Our thorough evaluation demonstrates that PAS is not only effective for common UDA, but also outperforms the state of the art in the more challenging Partial Domain Adaptation (PDA) situation, where the source label set subsumes the target one.
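The progressive anchoring idea described above can be illustrated with a minimal self-training sketch. This is not the authors' algorithm: the shared subspace is stood in for by plain PCA on source plus anchored target samples, the classifier is nearest-centroid, and pseudo-label confidence is a simple distance margin; the function name, parameters, and schedule are all illustrative assumptions. Each round refines the subspace, pseudo-labels the target, and anchors a growing fraction of the most confident target samples for the next round.

```python
import numpy as np

def progressive_pseudo_labeling(Xs, ys, Xt, n_rounds=5, dim=2, grow_frac=0.2):
    """Toy sketch of progressive subspace refinement with pseudo-label anchoring.

    Xs, ys : labeled source features and labels.
    Xt     : unlabeled target features.
    Returns pseudo labels for Xt after the final round.
    """
    classes = np.unique(ys)
    anchored_idx = np.array([], dtype=int)   # target samples trusted so far
    anchored_lbl = np.array([], dtype=int)   # their pseudo labels
    pseudo = None
    for r in range(1, n_rounds + 1):
        # Refine the shared subspace (PCA stand-in) on source + anchored target.
        X_fit = np.vstack([Xs, Xt[anchored_idx]]) if anchored_idx.size else Xs
        mean = X_fit.mean(axis=0)
        _, _, Vt = np.linalg.svd(X_fit - mean, full_matrices=False)
        P = Vt[:dim].T                       # d x dim projection matrix
        Zs, Zt = (Xs - mean) @ P, (Xt - mean) @ P
        # Class centroids from source samples plus anchored target samples.
        cents = []
        for c in classes:
            pts = [Zs[ys == c]]
            if anchored_idx.size:
                pts.append(Zt[anchored_idx][anchored_lbl == c])
            cents.append(np.vstack(pts).mean(axis=0))
        cents = np.asarray(cents)
        # Pseudo-label every target sample by its nearest centroid.
        d = np.linalg.norm(Zt[:, None, :] - cents[None, :, :], axis=2)
        pseudo = classes[d.argmin(axis=1)]
        # Confidence = margin between the two nearest centroids.
        ds = np.sort(d, axis=1)
        margin = ds[:, 1] - ds[:, 0]
        # Anchor a steadily growing fraction of the most confident samples.
        k = min(len(Xt), int(len(Xt) * grow_frac * r))
        anchored_idx = np.argsort(-margin)[:k]
        anchored_lbl = pseudo[anchored_idx]
    return pseudo
```

The design point is the schedule: only high-margin target samples feed back into the subspace early on, and the trusted set grows each round, which is the hedge against mode collapse that the abstract describes.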
Similar resources
Domain Adaptation with Coupled Subspaces
Domain adaptation algorithms address a key issue in applied machine learning: How can we train a system under a source distribution but achieve high performance under a different target distribution? We tackle this question for divergent distributions where crucial predictive target features may not even have support under the source distribution. In this setting, the key intuition is that...
Unsupervised Transductive Domain Adaptation
Supervised learning with large scale labeled datasets and deep layered models has made a paradigm shift in diverse areas in learning and recognition. However, this approach still suffers generalization issues under the presence of a domain shift between the training and the test data distribution. In this regard, unsupervised domain adaptation algorithms have been proposed to directly address t...
Unsupervised Domain Adaptation with Feature Embeddings
Representation learning is the dominant technique for unsupervised domain adaptation, but existing approaches often require the specification of “pivot features” that generalize across domains, which are selected by task-specific heuristics. We show that a novel but simple feature embedding approach provides better performance, by exploiting the feature template structure common in NLP problems.
Unsupervised Domain Adaptation with Residual Transfer Networks
The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source doma...
Unsupervised Multi-Domain Adaptation with Feature Embeddings
Representation learning is the dominant technique for unsupervised domain adaptation, but existing approaches have two major weaknesses. First, they often require the specification of “pivot features” that generalize across domains, which are selected by taskspecific heuristics. We show that a novel but simple feature embedding approach provides better performance, by exploiting the feature tem...
Journal
Journal title: Pattern Recognition
Year: 2022
ISSN: ['1873-5142', '0031-3203']
DOI: https://doi.org/10.1016/j.patcog.2022.108918